Approximate Bayesian Inference
Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse
Cornelius Schröder, Ben James, Leon Lagnado, Philipp Berens
Here, we develop an approximate Bayesian inference scheme for a fully stochastic, biophysically inspired model of glutamate release at the ribbon synapse, a highly specialized synapse found in different sensory systems. The model translates known structural features of the ribbon synapse into a set of stochastically coupled equations.
Reparameterization invariance in approximate Bayesian inference
Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e., BNNs assign different posterior densities to different parametrizations of identical functions. This creates a fundamental flaw in the application of Bayesian principles, as it breaks the correspondence between uncertainty over the parameters and uncertainty over the parametrized function. In this paper, we investigate this issue in the context of the increasingly popular linearized Laplace approximation. Specifically, it has been observed that linearized predictives alleviate the common underfitting problems of the Laplace approximation.
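To make the linearized Laplace predictive concrete, the sketch below fits a toy two-parameter model: the posterior precision is formed from a generalized Gauss–Newton approximation at a (pretend) MAP estimate, and the predictive variance is propagated through the parameter Jacobian. The model, data, and all names here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical toy model f(x; w) = w1 * tanh(w0 * x); names are illustrative.
def f(x, w):
    return w[1] * np.tanh(w[0] * x)

def jacobian(x, w):
    # d f / d w, computed analytically for the toy model
    t = np.tanh(w[0] * x)
    return np.stack([w[1] * (1 - t**2) * x, t], axis=-1)  # shape (n, 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=20)
w_map = np.array([1.5, 0.8])           # pretend this is the MAP estimate
sigma2, prior_prec = 0.1, 1.0          # noise variance, prior precision

# Generalized Gauss-Newton approximation to the posterior precision
J = jacobian(x_train, w_map)
precision = J.T @ J / sigma2 + prior_prec * np.eye(2)
Sigma = np.linalg.inv(precision)

# Linearized predictive: mean from the network itself,
# variance from J* Sigma J*^T plus observation noise
x_test = np.linspace(-3, 3, 5)
J_test = jacobian(x_test, w_map)
pred_mean = f(x_test, w_map)
pred_var = np.einsum("ij,jk,ik->i", J_test, Sigma, J_test) + sigma2
```

Because the predictive mean comes from the full network while only the uncertainty is linearized, this construction avoids the underfitting that plagues the vanilla Laplace predictive.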
Entropy-regularized Gradient Estimators for Approximate Bayesian Inference
Effective uncertainty quantification is important for training modern predictive models with limited data, enhancing both accuracy and robustness. While Bayesian methods are effective for this purpose, they can be challenging to scale. When employing approximate Bayesian inference, ensuring the quality of samples from the posterior distribution in a computationally efficient manner is essential. This paper addresses the estimation of the Bayesian posterior to generate diverse samples by approximating the gradient flow of the Kullback-Leibler (KL) divergence and the cross entropy of the target approximation under the metric induced by the Stein operator. It presents empirical evaluations on classification tasks to assess the method's performance and discusses its effectiveness for model-based reinforcement learning with uncertainty-aware network dynamics models.
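The gradient flow under a Stein-induced metric is closely related to Stein variational gradient descent (SVGD). The sketch below shows the standard SVGD particle update, not the authors' exact entropy-regularized estimator: the kernel-smoothed score term attracts particles toward high density, while the kernel-gradient term acts as the entropy-like repulsion that keeps samples diverse.

```python
import numpy as np

def rbf_kernel(x, y, h):
    # RBF kernel and its gradient w.r.t. the first argument
    diff = x[:, None, :] - y[None, :, :]            # (n, n, d)
    k = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))
    grad_x = -diff / h**2 * k[:, :, None]           # d k(x_i, x_j) / d x_i
    return k, grad_x

def svgd_step(particles, grad_log_p, h=0.5, step=0.1):
    """One SVGD update: attraction toward high density plus a repulsive
    (entropy-like) term that keeps the particle set diverse."""
    n = particles.shape[0]
    k, grad_k = rbf_kernel(particles, particles, h)
    scores = grad_log_p(particles)                  # (n, d)
    # sum over j of grad_{x_j} k(x_j, x_i) is grad_k summed over axis 0
    phi = (k @ scores + grad_k.sum(axis=0)) / n
    return particles + step * phi

# Toy target: standard 2-D Gaussian, so grad log p(x) = -x
rng = np.random.default_rng(1)
x = rng.normal(3.0, 0.5, size=(50, 2))              # start far from the target
for _ in range(200):
    x = svgd_step(x, lambda z: -z)
```

After the updates the particle cloud migrates to the target mode while the repulsion term prevents it from collapsing to a point, which is the "diverse samples" property the abstract emphasizes.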
The paper uses an online approximation to MCMC to draw parameters for a Bayesian neural network. The predictive distribution under these samples is then fitted using stochastic approximation. The comparisons are to recent work on approximate Bayesian inference applied to the same models and example problems. The paper does not yet demonstrate that these methods will push forward any particular application. The paper is a fairly natural extension of existing work.
Reviews: Approximate Inference Turns Deep Networks into Gaussian Processes
This paper demonstrates theoretically that multiple forms of approximate Bayesian inference (Laplace approximation and variational inference) for deep neural networks are equivalent to Gaussian processes. The authors formalize this connection and write out the GP covariance function corresponding to these networks, which surprisingly turns out to be the neural tangent kernel. The authors also establish a connection to the training procedure of the neural network and GPs, which is a novel contribution. There is a growing literature on the connection between neural networks and Gaussian processes, with a variety of papers establishing the connection in the infinite limit of hidden units. This paper adds nicely to that literature, developing a connection to approximate Bayesian inference.
Reviews: Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse
[The author responses answered my questions as well as points raised by other reviewers, providing additional clarification.] This paper formulates a fully probabilistic model of the vesicle-release dynamics at the sub-cellular biophysical level in the ribbon synapse. The paper then develops a likelihood-free inference method, tests it on a synthetic dataset, and finally infers the parameters of vesicle release in the ribbon synapse from real data. Originality: The paper presents a novel combination of biophysical modeling of the ribbon synapse and a likelihood-free inference of the parameters. To my knowledge, the fully stochastic modeling of the vesicle-release dynamics is itself new.
Reviews: Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse
This is an interesting paper on a mechanistic model of the ribbon synapse along with an ABC inference approach. Neither component is particularly novel, but the paper is thorough and compelling. The audience will likely be computationally-savvy experimental neuroscientists and those interested in applications of ABC; the former may be harder to find at NeurIPS, though they do exist. I encourage the authors to make the suggested revisions before the camera ready deadline.
Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse
The inherent noise of neural systems makes it difficult to construct models which accurately capture experimental measurements of their activity. While much research has been done on how to efficiently model neural activity with descriptive models such as linear-nonlinear-models (LN), Bayesian inference for mechanistic models has received considerably less attention. One reason for this is that these models typically lead to intractable likelihoods and thus make parameter inference difficult. Here, we develop an approximate Bayesian inference scheme for a fully stochastic, biophysically inspired model of glutamate release at the ribbon synapse, a highly specialized synapse found in different sensory systems. The model translates known structural features of the ribbon synapse into a set of stochastically coupled equations.
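Likelihood-free inference for such a model can be illustrated with the simplest member of the family, rejection ABC: simulate from prior draws and keep parameters whose summary statistics land near the observed ones. The toy binomial release simulator, summaries, and tolerance below are assumptions for illustration, not the paper's actual model or inference scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p, n_sites=10, n_trials=200):
    # Toy stochastic simulator: vesicles released per trial from
    # n_sites docking sites, each releasing with probability p
    return rng.binomial(n_sites, p, size=n_trials)

def summary(counts):
    # Summary statistics: mean and variance of release counts
    return np.array([counts.mean(), counts.var()])

# "Observed" data generated from a hidden ground-truth parameter
p_true = 0.3
s_obs = summary(simulate(p_true))

# ABC rejection: keep prior draws whose simulated summaries land close
accepted = []
for _ in range(5000):
    p = rng.uniform(0, 1)             # uniform prior over release probability
    if np.linalg.norm(summary(simulate(p)) - s_obs) < 0.2:
        accepted.append(p)
posterior = np.array(accepted)        # approximate posterior samples for p
```

The accepted draws concentrate around the ground-truth release probability; more sample-efficient schemes (e.g. the density-estimation approach used in the paper) replace the hard accept/reject step but follow the same simulate-and-compare logic.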
ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
Jesson, Andrew, Lu, Chris, Gupta, Gunshi, Filos, Angelos, Foerster, Jakob Nicolaus, Gal, Yarin
This paper introduces an effective and practical step toward approximate Bayesian inference in on-policy actor-critic deep reinforcement learning. This step manifests as three simple modifications to the Asynchronous Advantage Actor-Critic (A3C) algorithm: (1) applying a ReLU function to advantage estimates, (2) spectral normalization of actor-critic weights, and (3) incorporating dropout as a Bayesian approximation. We prove under standard assumptions that restricting policy updates to positive advantages optimizes for value by maximizing a lower bound on the value function plus an additive term. We show that the additive term is bounded proportional to the Lipschitz constant of the value function, which offers theoretical grounding for spectral normalization of critic weights. Finally, our application of dropout corresponds to approximate Bayesian inference over both the actor and critic parameters, which enables prudent state-aware exploration around the modes of the actor via Thompson sampling. Extensive empirical evaluations on diverse benchmarks reveal the superior performance of our approach compared to existing on- and off-policy algorithms. We demonstrate significant improvements for median and interquartile mean metrics over PPO, SAC, and TD3 on the MuJoCo continuous control benchmark. Moreover, we see improvement over PPO in the challenging ProcGen generalization benchmark.
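The three modifications can be sketched in isolation. The NumPy snippet below is a schematic with hypothetical names, not the authors' implementation: a ReLU on advantage estimates, spectral normalization of a weight matrix via power iteration, and a sampled dropout mask acting as one posterior draw.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Positive advantages: only transitions that improved on the value
# baseline contribute to the policy-gradient update
def positive_advantage(returns, values):
    return np.maximum(returns - values, 0.0)      # ReLU on the advantage

# (2) Spectral normalization of a weight matrix via power iteration,
# bounding the layer's Lipschitz constant by forcing sigma_max = 1
def spectral_normalize(W, n_iter=30):
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u; v /= np.linalg.norm(v)
        u = W @ v;  u /= np.linalg.norm(u)
    sigma = u @ W @ v                             # leading singular value
    return W / sigma

# (3) Dropout as approximate Bayesian inference: each sampled mask is one
# draw from the approximate posterior, enabling Thompson-style exploration
def dropout_forward(x, W, p=0.1):
    mask = rng.random(W.shape) > p
    return (W * mask) @ x / (1 - p)               # inverted-dropout scaling
```

In the full algorithm these pieces are combined: advantages are clipped before weighting the policy gradient, the critic's layers are spectrally normalized, and actions are sampled with a fresh dropout mask per rollout.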